2 research outputs found

    Data driven approach to sparsification of reaction diffusion complex network systems

    Graph sparsification is an area of interest in computer science and applied mathematics. Sparsification of a graph, in general, aims to reduce the number of edges in the network while preserving specific properties of the graph, such as cuts and subgraph counts. Computing the sparsest cuts of a graph is known to be NP-hard, and sparsification routines exist for generating linear-sized sparsifiers in almost quadratic running time $O(n^{2+\epsilon})$. Consequently, obtaining a sparsifier can be a computationally demanding task, and the complexity varies with the level of sparsity required. In this study, we extend the concept of sparsification to the realm of reaction-diffusion complex systems. We aim to address the challenge of reducing the number of edges in the network while preserving the underlying flow dynamics. To tackle this problem, we adopt a relaxed approach considering only a subset of trajectories. We map the network sparsification problem to a data assimilation problem on a Reduced Order Model (ROM) space, with constraints targeted at preserving the eigenmodes of the Laplacian matrix under perturbations. The Laplacian matrix $L = D - A$ is the difference between the diagonal matrix of degrees $D$ and the graph's adjacency matrix $A$. We propose approximations to the eigenvalues and eigenvectors of the Laplacian matrix subject to perturbations for computational feasibility and include a custom function based on these approximations as a constraint in the data assimilation framework. We also demonstrate how our framework extends to achieving sparsity in the parameter sets of Neural Ordinary Differential Equations (neural ODEs).
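    The abstract defines the graph Laplacian $L = D - A$ and states that sparsification should preserve its eigenmodes. The following is a minimal sketch, not the authors' method, of how those quantities can be computed for a small undirected, unweighted graph; the function name and the example graph are illustrative assumptions.

```python
import numpy as np

def graph_laplacian(adjacency: np.ndarray) -> np.ndarray:
    """Return L = D - A for an undirected graph with adjacency matrix A."""
    degrees = adjacency.sum(axis=1)       # node degrees
    return np.diag(degrees) - adjacency   # L = D - A

# Hypothetical example: a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

L = graph_laplacian(A)

# Eigenmodes of the Laplacian (eigh is appropriate since L is symmetric).
# A sparsifier in the sense of the abstract would remove edges while keeping
# these eigenvalues/eigenvectors approximately unchanged.
eigenvalues, eigenvectors = np.linalg.eigh(L)
print(eigenvalues)  # the smallest eigenvalue is 0 for a connected graph
```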

    Reinforcing POD-based model reduction techniques in reaction-diffusion complex networks using stochastic filtering and pattern recognition

    Complex networks are used to model many real-world systems. However, the dimensionality of these systems can make them challenging to analyze. Dimensionality reduction techniques such as Proper Orthogonal Decomposition (POD) can be used in such cases. However, these models are susceptible to perturbations in the input data. We propose an algorithmic framework that combines techniques from pattern recognition (PR) and stochastic filtering theory to enhance the output of such models. The results of our study show that our method can improve the accuracy of the surrogate model under perturbed inputs. Deep Neural Networks (DNNs) are susceptible to adversarial attacks. However, recent research has revealed that Neural Ordinary Differential Equations (neural ODEs) exhibit robustness in specific applications. We benchmark our algorithmic framework against the neural ODE-based approach as a reference.
    Comment: 19 pages, 6 figures
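    The abstract builds on POD-based reduced order models. As a point of reference only, here is a minimal sketch of the standard POD step, extracting dominant modes from a snapshot matrix via the SVD; the snapshot data, function name, and rank are illustrative assumptions, and the authors' filtering and pattern-recognition framework is not reproduced here.

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, r: int) -> np.ndarray:
    """Return the leading r POD modes of a snapshot matrix.

    snapshots: (n_states, n_snapshots) array, each column one sampled state
    of the reaction-diffusion system.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]  # dominant spatial modes

# Hypothetical usage: project full states onto an r-dimensional ROM space.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 40))  # stand-in snapshot data
Phi = pod_basis(X, r=5)             # (100, 5) reduced basis
X_reduced = Phi.T @ X               # ROM coordinates, shape (5, 40)
X_approx = Phi @ X_reduced          # reconstruction from the ROM
```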